Citation impact or citation rate is a measure of how many times an academic journal article, book, or author is cited by other articles, books, or authors.
Citations are distributed highly unequally among researchers. In a study based on the Web of Science database across 118 scientific disciplines, the top 1% most-cited authors accounted for 21% of all citations. Between 2000 and 2015, the proportion of citations that went to this elite group grew from 14% to 21%. The highest concentrations of 'citation elite' researchers were in the Netherlands, the United Kingdom, Switzerland and Belgium. 70% of the authors in the Web of Science database have fewer than 5 publications, so that the most-cited authors among the 4 million included in this study constitute a tiny fraction.
However, a very high journal impact factor or CiteScore is often driven by a small number of very highly cited papers. For instance, most papers in Nature (impact factor 38.1, 2016) were cited only 10 or 20 times during the reference year. Journals with a lower impact factor (e.g. PLOS ONE, impact factor 3.1) publish many papers that are cited 0 to 5 times and few highly cited articles.
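Because the impact factor is effectively a mean, a handful of extreme values can pull it far above what a typical article in the journal achieves. The following toy calculation, using invented citation counts rather than real journal data, illustrates how the mean and the median can diverge:

```python
from statistics import mean, median

# Invented citation counts for ten hypothetical articles: one highly cited
# paper dominates the mean, which is what the impact factor measures.
citations = [0, 1, 1, 2, 3, 3, 4, 5, 8, 450]

print(f"mean (impact-factor-like): {mean(citations):.1f}")    # 47.7
print(f"median (typical article):  {median(citations):.1f}")  # 3.0
```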
Journal-level metrics are often misinterpreted as measures of journal or article quality. However, using non-article-level metrics to judge the impact of a single article is statistically invalid. Moreover, studies of methodological quality and reliability have found that "reliability of published research works in several fields may be decreasing with increasing journal rank", contrary to widespread expectations.
Citation distributions are strongly skewed for journals, because a very small number of articles drives the vast majority of citations; for this reason, some journals, such as those of the American Society for Microbiology, have stopped publicizing their impact factor. Citation counts mostly follow a lognormal distribution, except for the long tail, which is better fit by a power law.
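As a rough illustration of these distributional claims, the sketch below generates synthetic lognormal "citation counts", fits them, and then estimates a tail exponent from the top 1% of values; the parameters and data are invented for demonstration, not drawn from any real citation corpus:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "citation counts": lognormal in the bulk, as the text describes.
counts = rng.lognormal(mean=1.5, sigma=1.2, size=10_000)

# Fit a lognormal to the sample (the shape parameter corresponds to sigma).
shape, loc, scale = stats.lognorm.fit(counts, floc=0)
print(f"fitted sigma = {shape:.2f}, fitted median = {scale:.1f}")

# Crude tail check: a power law is a straight line on log-log axes, so a
# linear fit of log(rank) against log(count) over the top 1% of articles
# gives a rough estimate of the tail exponent.
tail = np.sort(counts)[-100:]          # top 1% most-"cited"
ranks = np.arange(len(tail), 0, -1)    # rank 1 = most cited
slope, _ = np.polyfit(np.log(tail), np.log(ranks), 1)
print(f"estimated tail exponent: {-slope:.2f}")
```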
Other journal-level metrics include the Eigenfactor, and the SCImago Journal Rank.
As early as 2004, the BMJ published the number of views for its articles, which was found to be somewhat correlated with citations. In 2008 the Journal of Medical Internet Research began publishing article views and Twitter mentions. These "tweetations" proved to be a good indicator of highly cited articles, leading the author to propose a "Twimpact factor", the number of tweets an article receives in the first seven days after publication, as well as a "Twindex", the rank percentile of an article's Twimpact factor.
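The definitions of these two Twitter-based metrics are simple enough to state in code. Below is a minimal sketch; the data layout (the published and tweet_times fields) is hypothetical, since the proposal specifies only the metrics themselves:

```python
from datetime import datetime, timedelta

def twimpact(article):
    """Twimpact factor: tweets received in the first seven days after publication."""
    cutoff = article["published"] + timedelta(days=7)
    return sum(1 for t in article["tweet_times"] if t <= cutoff)

def twindex(article, corpus):
    """Twindex: rank percentile of an article's Twimpact factor within a corpus."""
    scores = [twimpact(a) for a in corpus]
    return 100 * sum(s <= twimpact(article) for s in scores) / len(scores)
```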
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, Nature and Science proposed citation distribution metrics as an alternative to impact factors.
Research suggests that the impact of an article can be partly explained by superficial factors, not only by its scientific merits. Field-dependent factors are usually listed as an issue to be tackled not only when comparisons are made across disciplines, but also when different fields of research within one discipline are compared. In medicine, for instance, the number of authors, the number of references, the article length, and the presence of a colon in the title all influence impact, while in sociology the number of references, the article length, and the title length are among the relevant factors. Scholars have also been found to engage in ethically questionable behavior to inflate the number of citations their articles receive.
Automated citation indexing has changed the nature of citation analysis research, allowing millions of citations to be analyzed for large-scale patterns and knowledge discovery. The first example of automated citation indexing was CiteSeer, later followed by Google Scholar. More recently, advanced models for a dynamic analysis of citation aging have been proposed, and such models have even been used as predictive tools for estimating the citations a corpus of publications might obtain at any point in its lifetime.
Some researchers also propose that the journal citation rate on Wikipedia, alongside the traditional citation index, "may be a good indicator of the work's impact in the field of psychology."
According to Mario Biagioli: "All metrics of scientific evaluation are bound to be abused. Goodhart's law ... states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it."
The evidence that self-archived ("green") open access articles are cited more than non-open-access articles is somewhat stronger than the evidence that ("gold") open access journals are cited more than non-open-access journals (Young, J. S., & Brandes, P. M. (2020). Green and gold open access citation and interdisciplinary advantage: A bibliometric study of two science journals. The Journal of Academic Librarianship, 46(2), 102105). Two reasons for this are that many of today's top-cited journals are still only hybrid open access, where the author has the option to pay for gold (Torres-Salinas, D., Robinson-Garcia, N., & Moed, H. F. (2019). Disentangling Gold Open Access. In Springer Handbook of Science and Technology Indicators (pp. 129–144). Springer, Cham), and that many pure author-pays open access journals are of low quality or are outright fraudulent "predatory journals" that prey on authors' publish-or-perish pressure, thereby lowering the average citation counts of open access journals (Björk, B. C., Kanto-Karvonen, S., & Harviainen, J. T. (2020). How frequently are articles in predatory open access journals cited. Publications, 8(2), 17).